- We have implemented GPU-aware support across all AWP-ODC versions and enhanced the message-passing collective communications for this memory-bound finite-difference solver. This provides cutting-edge communication support for production simulations on leadership-class computing facilities, including OLCF Frontier and TACC Vista. We achieved significant performance gains, reaching 37 sustained Petaflop/s and reducing time-to-solution by 17.2% using the GPU-aware feature on 8,192 Frontier nodes, or 65,536 MI250X GCDs. The AWP-ODC code has also been optimized for TACC Vista, an Arm-based NVIDIA GH200 Grace Hopper Superchip system, demonstrating excellent application performance. This poster will showcase these studies and GPU performance characteristics. We will discuss our verification of the GPU-aware development and the use of high-performance MVAPICH libraries, including on-the-fly compression, on modern GPU clusters.
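The GPU-aware feature referenced above amounts to letting MPI read and write GPU-resident buffers directly, so halo and collective traffic does not have to be staged through host memory. The sketch below is not the AWP-ODC implementation; it is a minimal C++ illustration of the idea for a simple point-to-point halo exchange, assuming a ROCm-aware MPI build (such as MVAPICH) so that pointers returned by hipMalloc can be passed straight to MPI calls. The buffer size and one-dimensional neighbor layout are illustrative; the same device-pointer principle applies to collectives.

```cpp
// Minimal sketch, not the AWP-ODC implementation: a halo exchange that
// passes device pointers directly to MPI. This only works when the MPI
// library is GPU-aware (e.g., a ROCm-aware MVAPICH build); otherwise the
// halo would first have to be copied into host staging buffers.
#include <hip/hip_runtime.h>
#include <mpi.h>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, nranks = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nranks);

    int ndev = 0;
    hipGetDeviceCount(&ndev);
    hipSetDevice(ndev > 0 ? rank % ndev : 0);      // one GCD per MPI rank

    const size_t halo = 1 << 20;                   // illustrative halo size (elements)
    double *d_send = nullptr, *d_recv = nullptr;
    hipMalloc(reinterpret_cast<void**>(&d_send), halo * sizeof(double));
    hipMalloc(reinterpret_cast<void**>(&d_recv), halo * sizeof(double));
    hipMemset(d_send, 0, halo * sizeof(double));   // stand-in for wavefield data

    const int right = (rank + 1) % nranks;
    const int left  = (rank - 1 + nranks) % nranks;

    // Device pointers go straight into MPI; the GPU-aware library moves the
    // data GPU-to-GPU (or via GPU RDMA) without a host round trip.
    MPI_Request reqs[2];
    MPI_Irecv(d_recv, halo, MPI_DOUBLE, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Isend(d_send, halo, MPI_DOUBLE, right, 0, MPI_COMM_WORLD, &reqs[1]);
    MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);

    hipFree(d_send);
    hipFree(d_recv);
    MPI_Finalize();
    return 0;
}
```

Without GPU-aware MPI, the same exchange would need hipMemcpy transfers to and from host buffers around every MPI call; avoiding that extra data movement is the kind of saving the time-to-solution reduction reflects.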
- The SCEC CyberShake platform implements a repeatable scientific workflow to perform 3D physics-based probabilistic seismic hazard analysis (PSHA). Earlier this year we calculated CyberShake Study 24.8 for the San Francisco Bay Area. Study 24.8 includes both low-frequency and broadband PSHA models, calculated at 315 sites. This study required building a regional velocity model from existing 3D models, with a near-surface low-velocity taper and a minimum Vs of 400 m/s. Pegasus-WMS managed the execution of Study 24.8 for 45 days on the OLCF Frontier and TACC Frontera systems. The study produced 127 million seismograms and 34 billion intensity measures, which were automatically transferred to SCEC storage. Study 24.8 used a HIP implementation of the AWP-ODC wave propagation code on AMD-GPU Frontier nodes to produce strain Green tensors, which were convolved with event realizations to synthesize seismograms. Seismograms were processed to derive data products such as intensity measures, site-specific hazard curves, and regional hazard maps. CyberShake combines 3D low-frequency deterministic (≤1 Hz) simulations with high-frequency calculations using stochastic modules from the Broadband Platform to produce results up to 25 Hz, with validation performed using historical events. New CyberShake data products from this study include vertical seismograms, vertical response spectra, and period-dependent significant durations. The presented results include comparisons of hazard estimates between Study 24.8, the previous CyberShake study for this region (18.8), and the NGA-West2 ground motion models (GMMs). We find that Study 24.8 shows overall lower hazard than 18.8, likely due to changes in rupture coherency, with the exception of a few regions: 24.8 shows higher hazard than both the GMMs and 18.8 at long periods in the Livermore area, due to deepening of the Livermore basin in the velocity model, as well as higher hazard east of San Pablo Bay and south of San Jose. At high frequencies, Study 24.8 hazard is lower than that of the GMMs, reflecting reduced variability in the stochastic components. We are also using CyberShake ground motion data to investigate the effects of preferred rupture directions on site-specific hazard. By default, PSHA hazard products assume that all events of a given magnitude on a given fault are equally likely; by varying these probabilities we can examine the effect that preferred rupture directions on given faults have on CyberShake hazard estimates.
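As a rough illustration of the final post-processing step described above, the sketch below shows how a site-specific hazard curve can be assembled once per-event intensity measures are available. It is a simplified stand-in for the CyberShake hazard calculation, not its actual code: each rupture variation is assumed to carry an annual occurrence rate and a single simulated intensity measure, and the annual exceedance probability at each threshold follows the standard Poissonian combination. All input values are hypothetical.

```cpp
// Simplified sketch of hazard-curve assembly (hypothetical inputs, not the
// CyberShake implementation). Each event carries an annual rate and one
// simulated intensity measure (e.g., 3 s spectral acceleration in g).
#include <cmath>
#include <cstdio>
#include <vector>

struct Event {
    double annual_rate;   // mean annual occurrence rate of this rupture variation
    double im;            // simulated intensity measure at the site
};

int main() {
    std::vector<Event> events = {
        {1e-3, 0.35}, {5e-4, 0.62}, {2e-3, 0.10}, {8e-4, 0.48}   // hypothetical
    };
    const std::vector<double> thresholds = {0.1, 0.2, 0.4, 0.8}; // IM levels (g)

    for (double x : thresholds) {
        // Total annual rate of events whose simulated IM exceeds level x.
        double lambda = 0.0;
        for (const Event& e : events)
            if (e.im > x) lambda += e.annual_rate;
        // Poisson assumption: probability of at least one exceedance per year.
        double p_annual = 1.0 - std::exp(-lambda);
        std::printf("IM > %.2f g : annual exceedance prob = %.3e\n", x, p_annual);
    }
    return 0;
}
```

Varying the per-event rates in such a calculation, rather than treating all events of a given magnitude as equally likely, is the mechanism by which preferred rupture directions would change the resulting hazard curve.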
- We have ported the topography version of AWP-ODC, with its discontinuous-mesh feature enabled, to HIP and verified that it runs on AMD MI250X GPUs. We benchmarked 103.3% parallel efficiency on Frontier between 8 and 4,096 nodes, or up to 32,768 GCDs. Frontier, the Leadership Computing Facility system at Oak Ridge National Laboratory (ORNL), is a two-exaflop/s machine based on AMD Radeon Instinct GPUs and EPYC CPUs. The HIP topography code has been used in production runs on Frontier, currently the primary computing engine for the 2024 SCEC INCITE allocation, a 700K node-hour supercomputing time award. Furthermore, we implemented ROCm-aware GPU-direct support in the topography code and demonstrated an additional 14% reduction in time-to-solution at up to 4,096 nodes. The AWP-ODC-Topo code has also been tuned on TACC Vista, an Arm-based NVIDIA GH200 Grace Hopper Superchip system, demonstrating excellent performance. This poster will present weak-scaling studies and GPU performance characteristics. We discuss our verification of the ROCm-aware development and the use of high-performance MVAPICH libraries, including on-the-fly compression, on modern GPU clusters.
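For readers unfamiliar with the metric, the 103.3% figure above is a weak-scaling efficiency: the workload per node is held fixed while the node count grows, and efficiency is the baseline time divided by the time at scale, so values above 100% indicate the code ran slightly faster per step at 4,096 nodes than at the 8-node baseline. The snippet below, with made-up timings, simply shows the arithmetic (Frontier has 8 MI250X GCDs per node).

```cpp
// Weak-scaling parallel efficiency: E(N) = T(N0) / T(N) with the per-node
// workload held constant. Timings below are made up for illustration only.
#include <cstdio>

int main() {
    const int    base_nodes = 8;
    const double t_base     = 10.30;  // hypothetical seconds per step at 8 nodes
    const int    nodes[]    = {64, 512, 4096};
    const double t[]        = {10.25, 10.12, 9.97};  // hypothetical timings

    std::printf("Baseline: %d nodes (%d GCDs)\n", base_nodes, base_nodes * 8);
    for (int i = 0; i < 3; ++i) {
        double eff = t_base / t[i];   // > 1.0 means superlinear weak scaling
        std::printf("%5d nodes (%6d GCDs): efficiency = %.1f%%\n",
                    nodes[i], nodes[i] * 8, 100.0 * eff);
    }
    return 0;
}
```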
- AWP-ODC is a 4th-order finite-difference code used by the SCEC community for linear wave propagation, Iwan-type nonlinear dynamic rupture and wave propagation, and strain Green tensor (SGT) simulation. We have ported the CUDA version of AWP-ODC-SGT, a reciprocal version used in the SCEC CyberShake project, to HIP and verified that it also runs on AMD GPUs. This code achieved sustained 32.6 Petaflop/s performance and 95.6% parallel efficiency at full scale on Frontier, a Leadership Computing Facility at Oak Ridge National Laboratory. The readiness of this community software on AMD Radeon Instinct GPUs and EPYC CPUs allows SCEC to take advantage of exascale systems to produce more realistic ground motions and more accurate seismic hazard products. We have also deployed AWP-ODC to Azure to leverage the tools and services Azure provides for tightly coupled HPC simulation on a commercial cloud. We collaborated with the Internet2/Azure Accelerator support team, as part of the Microsoft Internet2/Azure Accelerator for Research Fall 2022 Program, with Azure credits awarded through CloudBank, an NSF-funded initiative. We demonstrate AWP performance with a ground motion simulation benchmark on various GPU-based cloud instances and compare the cloud solution to on-premises bare-metal systems. AWP-ODC currently achieves excellent speedup and efficiency on both CPU and GPU architectures. The Iwan-type dynamic rupture and wave propagation solver, however, faces significant challenges because its computational workload grows with the number of yield surfaces chosen. Compared to the linear solution, the Iwan model adds 10x-30x more computational time and 5x-13x more memory consumption, requiring substantial code changes to obtain excellent performance. Supported by NSF's Characteristic Science Applications (CSA) program for the Leadership-Class Computing Facility (LCCF) at the Texas Advanced Computing Center (TACC), we are porting and improving the performance of this nonlinear AWP-ODC software in preparation for Horizon, the next-generation NSF LCCF system to be installed at TACC. During Texascale Days on TACC's current Frontera system, we carried out an Iwan-type nonlinear dynamic rupture and wave propagation simulation of a Mw 7.8 scenario earthquake on the southern San Andreas fault. This simulation modeled 83 seconds of rupture with a grid spacing of 25 m to resolve frequencies up to 4 Hz with a minimum shear-wave velocity of 500 m/s.
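The 25 m grid spacing, 4 Hz target frequency, and 500 m/s minimum shear-wave velocity quoted for the Frontera run are tied together by the usual points-per-wavelength rule for finite-difference solvers. The short calculation below is a back-of-the-envelope sketch rather than the production setup; the domain dimensions are assumed purely to show how quickly the grid-point count grows.

```cpp
// Back-of-the-envelope resolution check for a finite-difference wave solver.
// Spacing, Vs_min, and the 4 Hz target come from the abstract above; the
// domain dimensions are assumed for illustration only.
#include <cstdio>

int main() {
    const double dx     = 25.0;    // grid spacing (m)
    const double vs_min = 500.0;   // minimum shear-wave velocity (m/s)
    const double f_max  = 4.0;     // highest frequency to resolve (Hz)

    // Shortest wavelength in the model and how many grid points sample it.
    double lambda_min = vs_min / f_max;     // = 125 m
    double ppw        = lambda_min / dx;    // = 5 points per minimum wavelength

    // Hypothetical domain (km) to show how the grid size adds up.
    double lx = 500.0, ly = 250.0, lz = 50.0;
    double npts = (lx * 1e3 / dx) * (ly * 1e3 / dx) * (lz * 1e3 / dx);

    std::printf("min wavelength    : %.0f m\n", lambda_min);
    std::printf("points/wavelength : %.1f\n", ppw);
    std::printf("grid points       : %.2e (assumed %gx%gx%g km domain)\n",
                npts, lx, ly, lz);
    return 0;
}
```

The same arithmetic explains why pushing to higher frequencies or lower minimum velocities is so expensive: halving the wavelength at fixed points-per-wavelength halves dx and multiplies the grid-point count by eight, before the extra cost of the Iwan yield surfaces is even considered.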